Convergence Rate of Coefficient Regularized Kernel-based Learning Algorithms
Authors
Abstract
Similar Articles
Convergence Rate of Coefficient Regularized Kernel-based Learning Algorithms
We investigate machine learning for least squares regression with data-dependent hypothesis spaces and coefficient regularization algorithms based on general kernels. We provide estimates of the learning rates for both regression and classification when the hypothesis spaces are sample dependent. Under a weak condition on the kernels, we derive the learning error by estimating the rate of some K-f...
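As orientation for the scheme named in this abstract, a minimal sketch of coefficient-regularized least squares over a sample-dependent hypothesis space (the notation and the exact scaling of the penalty are assumptions for illustration, not quoted from the paper):
\[
  \mathcal{H}_{K,\mathbf{z}} = \Big\{ f_{\alpha} = \sum_{i=1}^{m} \alpha_i K(\cdot, x_i) \; : \; \alpha \in \mathbb{R}^m \Big\},
\]
\[
  \alpha_{\mathbf{z}} = \arg\min_{\alpha \in \mathbb{R}^m}
  \frac{1}{m} \sum_{j=1}^{m} \big( f_{\alpha}(x_j) - y_j \big)^2
  + \lambda \sum_{i=1}^{m} \alpha_i^2 ,
\]
where $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$ is the sample and $\lambda > 0$ is the regularization parameter. The penalty acts on the coefficient vector $\alpha$ rather than on an RKHS norm of $f_{\alpha}$, which is why the kernel $K$ can be a general kernel and need not be positive semi-definite.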
On the convergence rate of lp-norm multiple kernel learning
We derive an upper bound on the local Rademacher complexity of lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local analyses covered only the case p = 1, while our analysis covers all cases 1 ≤ p ≤ ∞, assuming the feature mappings corresponding to the different kernels are uncorrelated. We also give a lower bound that shows...
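For reference, a common primal formulation of lp-norm multiple kernel learning (this particular Tikhonov-regularized parametrization is an assumption for illustration, not taken from the truncated abstract):
\[
  \min_{\theta \ge 0,\; \|\theta\|_p \le 1} \;
  \min_{f_1, \dots, f_M} \;
  \frac{1}{n} \sum_{i=1}^{n} \ell\Big( \sum_{m=1}^{M} f_m(x_i),\, y_i \Big)
  + \frac{\lambda}{2} \sum_{m=1}^{M} \frac{\|f_m\|_{\mathcal{H}_m}^2}{\theta_m},
\]
where each $f_m$ lies in the RKHS $\mathcal{H}_m$ of kernel $K_m$; $p = 1$ recovers sparse MKL, while larger $p$ spreads weight more uniformly across the $M$ kernels.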
Multitask Learning Using Regularized Multiple Kernel Learning
The empirical success of kernel-based learning algorithms depends heavily on the kernel function used. Instead of using a single fixed kernel function, multiple kernel learning (MKL) algorithms learn a combination of different kernel functions in order to obtain a similarity measure that better matches the underlying problem. We study multitask learning (MTL) problems and formulate a novel M...
Sparse Multiple Kernel Learning with Geometric Convergence Rate
In this paper, we study the problem of sparse multiple kernel learning (MKL), where the goal is to efficiently learn a combination of a fixed small number of kernels from a large pool that leads to a kernel classifier with a small prediction error. We develop an efficient algorithm, based on greedy coordinate descent, that achieves a geometric convergence rate under ...
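Schematically, the sparse MKL problem described above can be written as selecting at most $k$ of $N$ candidate kernels (notation assumed for illustration, not quoted from the paper):
\[
  \min_{\theta \ge 0,\; \|\theta\|_0 \le k} \;
  \min_{f \in \mathcal{H}_{K_\theta}} \;
  \frac{1}{n} \sum_{i=1}^{n} \ell\big( f(x_i), y_i \big) + \lambda \|f\|_{K_\theta}^2,
  \qquad K_\theta = \sum_{j=1}^{N} \theta_j K_j .
\]
A greedy coordinate-descent scheme of the kind mentioned would activate one coordinate of $\theta$ at a time, adding one kernel to the combination per iteration.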
Regularized Kernel Algorithms for Support Estimation
In the framework of non-parametric support estimation, we study the statistical properties of a set estimator defined by means of Kernel Principal Component Analysis. Under a suitable assumption on the kernel, we prove that the algorithm is strongly consistent with respect to the Hausdorff distance. We also extend the above analysis to a larger class of set estimators defined in terms of a low-...
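As an illustration of a KPCA-based set estimator of this type (the specific form below is an assumption, not quoted from the truncated abstract):
\[
  \widehat{S} = \big\{ x \in \mathcal{X} \; : \; \big\| (I - P_k)\, \Phi(x) \big\|_{\mathcal{H}} \le \tau \big\},
\]
where $\Phi$ is the feature map of the kernel, $P_k$ is the orthogonal projection onto the span of the top $k$ empirical kernel principal components, and $\tau > 0$ is a threshold; the estimated support is the region where the KPCA reconstruction error is small.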
Journal
Journal title: Applied Mathematics & Information Sciences
Year: 2014
ISSN: 1935-0090, 2325-0399
DOI: 10.12785/amis/080251